Utterance Classification with Logical Neural Network: Explainable AI for Mental Disorder Diagnosis
In response to the global challenge of mental health problems, we propose a
Logical Neural Network (LNN) based Neuro-Symbolic AI method for the diagnosis
of mental disorders. Because effective therapy coverage for mental disorders
is lacking, there is a need for an AI solution that can assist therapists
with diagnosis. However, current neural network models lack explainability
and may not be trusted by therapists. The LNN is a recurrent neural network
architecture that combines the learning capabilities of neural networks with
the reasoning capabilities of classical logic-based AI. The proposed system
maps input predicates extracted from clinical interviews to a mental disorder
class, and different predicate-pruning techniques are used to achieve
scalability and higher scores. In addition, we provide an insight-extraction
method to aid therapists with their diagnosis. The proposed system addresses
the lack of explainability of current neural network models and provides a
more trustworthy solution for mental disorder diagnosis. Comment: ACL 202
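The core idea of an LNN neuron, a logical connective whose truth value is computed from weighted real-valued inputs, can be sketched in a few lines. This is an illustrative simplification, not the paper's implementation: the predicate names, truth values, and weights below are hypothetical, and a real LNN also learns the weights and bias under logical constraints and propagates truth bounds.

```python
def weighted_and(truths, weights, beta=1.0):
    """Real-valued weighted conjunction in the spirit of an LNN neuron:
    the output stays near 1 only when every weighted input predicate is
    near-true, and drops toward 0 as any important predicate turns false."""
    s = beta - sum(w * (1.0 - t) for w, t in zip(weights, truths))
    return min(1.0, max(0.0, s))  # clip the truth value to [0, 1]

# Hypothetical diagnostic rule: low_mood AND sleep_disturbance
both_true = weighted_and([0.9, 0.8], [1.0, 1.0])  # both predicates fairly true
one_false = weighted_and([0.9, 0.1], [1.0, 1.0])  # one predicate false
```

Because the neuron is a logical connective, its learned weights can be read back as the relative importance of each clinical predicate, which is the source of the explainability the abstract emphasizes.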
Transport collaboratif homme/humanoïde (Human/Humanoid Collaborative Transport)
Humanoid robots provide many advantages when working together with humans to perform various tasks. Since humans in general have a lot of experience in physically collaborating with each other, a humanoid with a similar range of motion and sensing has the potential to do the same. This thesis is focused on enabling humanoids that can do such tasks together with humans: collaborative humanoids. In particular, we use the example where a humanoid and a human collaboratively carry and transport objects together. However, there is much to be done in order to achieve this. Here, we first focus on utilizing vision and haptic information together to enable better collaboration. More specifically, the use of vision-based control together with admittance control is tested as a framework for enabling the humanoid to collaborate better by having its own notion of the task. Next, we detail how walking pattern generators can be designed to take physical collaboration into account. For this, we create leader-type and follower-type walking pattern generators. Finally, the task of collaboratively carrying an object together with a human is broken down and implemented within an optimization-based whole-body control framework.
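The combination of vision-based control and admittance control described above can be sketched as follows. This is a minimal 1-DoF sketch under assumed mass, damping, gain, and force values (none of which come from the thesis): the measured interaction force is turned into a velocity offset that is summed with the visual-servoing velocity.

```python
def admittance_step(v, f_ext, mass=5.0, damping=20.0, dt=0.005):
    """One Euler step of the 1-DoF admittance law  M*a + D*v = f_ext:
    the human's measured push f_ext is converted into robot motion."""
    a = (f_ext - damping * v) / mass
    return v + a * dt

# Hypothetical visual-servoing term: gain * image-space error
v_vs = -0.5 * 0.02

# Let the admittance dynamics respond to a steady 10 N push for 0.5 s
v_adm = 0.0
for _ in range(100):
    v_adm = admittance_step(v_adm, 10.0)

v_cmd = v_vs + v_adm  # velocity command sent to the robot
```

The admittance term lets the robot comply with the human's force, while the visual term gives it its "own notion of the task," so it does not merely follow but also corrects toward the visual goal.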
Combining visual servoing and walking in an acceleration resolved whole-body control framework
Journées Nationales de la Recherche Humanoïde, Toulouse, France. This work aims to create a solution for executing visually guided tasks on a humanoid robot while taking advantage of its floating base. The underlying framework is an acceleration-resolved, weighted Quadratic Programming approach for whole-body control.
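The unconstrained core of such a weighted scheme can be sketched as a least-squares trade-off between tasks. This sketch is an assumption-laden simplification: a real acceleration-resolved whole-body QP also enforces dynamics, contact, and torque constraints, which are omitted here; task values are invented for illustration.

```python
import numpy as np

def weighted_task_resolution(tasks, ndof):
    """Solve  min over ddq of  sum_i w_i * ||J_i @ ddq - a_i||^2
    via the normal equations. Each task is a tuple (weight, J, a_des),
    where J maps joint accelerations ddq to the task space."""
    H = np.zeros((ndof, ndof))
    g = np.zeros(ndof)
    for w, J, a in tasks:
        H += w * J.T @ J
        g += w * J.T @ a
    # Small regularization keeps H invertible when tasks are deficient
    return np.linalg.solve(H + 1e-9 * np.eye(ndof), g)

# Two conflicting 1-DoF tasks on a 1-DoF "robot": the heavier weight wins
ddq = weighted_task_resolution(
    [(10.0, np.array([[1.0]]), np.array([1.0])),   # e.g. visual task
     (1.0,  np.array([[1.0]]), np.array([0.0]))],  # e.g. posture task
    ndof=1)
```

The weights implement the soft prioritization between, for example, the visual-servoing task and walking-related tasks, without the strict hierarchies of null-space methods.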
Visual Servoing for the REEM Humanoid Robot’s Upper Body
Abstract — In this paper, a framework for visual servo control of a humanoid robot's upper body is presented. The framework is then implemented and tested on the REEM humanoid robot. The implementation is composed of two controllers: a head-gaze controller and a hand-position controller. The main application is precise manipulation tasks using the hand, so the hand controller takes top priority. The head controller is designed to keep both the hand and the object in the camera's field of view. For robustness, a secondary task of joint limit avoidance is implemented using the redundancy framework and a recently proposed large projection operator. For safety, joint velocity scaling is implemented. The implementation on REEM uses the ROS and ViSP middleware. The results presented include simulations in Gazebo and experiments on the real robot. Furthermore, results with the real robot show how visual servoing can overcome deficiencies in REEM's kinematic calibration.
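The redundancy-based joint limit avoidance mentioned above can be sketched with the classical null-space projection. Note the hedge: the paper uses a "large projection operator" variant, whereas this sketch shows only the standard Moore-Penrose form, with invented joint limits and gains.

```python
import numpy as np

def redundancy_resolution(J, v_task, q, q_min, q_max, k=1.0):
    """dq = J+ v_task + (I - J+ J) z : the secondary joint-limit-avoidance
    motion z is projected into the null space of the primary task, so it
    cannot disturb the task-space velocity v_task."""
    J_pinv = np.linalg.pinv(J)
    mid = 0.5 * (q_min + q_max)
    z = -k * (q - mid)                    # push joints toward mid-range
    N = np.eye(J.shape[1]) - J_pinv @ J   # null-space projector
    return J_pinv @ v_task + N @ z

# 2-DoF arm, 1-D task: joint 0 is near its upper limit
J = np.array([[1.0, 1.0]])
dq = redundancy_resolution(J, np.array([0.2]),
                           q=np.array([0.9, -0.9]),
                           q_min=np.array([-1.0, -1.0]),
                           q_max=np.array([1.0, 1.0]))
```

Even though the secondary motion pulls joint 0 back from its limit, the product J @ dq still equals the commanded task velocity, which is exactly the property the redundancy framework guarantees.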
Human-Humanoid Joint Haptic Table Carrying Task with Height Stabilization using Vision
In this paper, a first step is taken towards using vision in human-humanoid haptic joint actions. Haptic joint actions are characterized by physical interaction throughout the execution of a common goal; because of this, most of the focus has been on force/torque-based control. However, force/torque information is not rich enough for some tasks. Here, a particular case is shown: height stabilization during table carrying. To achieve this, a visual servoing controller is used to generate a reference trajectory for the impedance controller. The control law design is fully described, along with important considerations for the vision algorithm and a framework to make pose estimation robust during the humanoid robot's table carrying task. We then demonstrate all this in an experiment where a human and the HRP-2 humanoid jointly transport a beam, using combined force and vision data to adjust the interaction impedance while keeping the beam's inclination horizontal.
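The structure described above, vision supplying the reference trajectory that an impedance controller tracks, can be sketched in one dimension. The mass, damping, stiffness, and reference values are hypothetical; the paper's actual controller acts on the full table-carrying interaction, not a single height coordinate.

```python
def impedance_step(x, xd, x_ref, f_ext, M=2.0, B=40.0, K=200.0, dt=0.002):
    """One semi-implicit Euler step of  M*xdd + B*xd + K*(x - x_ref) = f_ext,
    where x_ref is the height reference produced by visual servoing and
    f_ext is the measured interaction force with the human."""
    xdd = (f_ext - B * xd - K * (x - x_ref)) / M
    xd += xdd * dt
    x += xd * dt
    return x, xd

# The visual controller raises the reference height by 5 cm; no external force
x, xd = 0.0, 0.0
for _ in range(2000):                # 4 s of simulated settling
    x, xd = impedance_step(x, xd, x_ref=0.05, f_ext=0.0)
```

With these (critically damped) parameters the height converges to the visual reference, while a nonzero f_ext from the human would transparently deflect the trajectory, which is the blend of force and vision data the abstract describes.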